
    Improving Collection Understanding for Web Archives with Storytelling: Shining Light Into Dark and Stormy Archives

    Collections are the tools that people use to make sense of an ever-increasing number of archived web pages. As collections themselves grow, we need tools to make sense of them. Tools that work on the general web, like search engines, are not a good fit for these collections because search engines do not currently represent multiple document versions well. Web archive collections are vast, some containing hundreds of thousands of documents. Thousands of collections exist, many of which cover the same topic. Few collections include standardized metadata. Too many documents from too many collections with insufficient metadata make collection understanding an expensive proposition. This dissertation establishes a five-process model to assist with web archive collection understanding. This model aims to produce a social media story, a visualization with which most web users are familiar. Each social media story contains surrogates, which are summaries of individual documents. These surrogates, when presented together, summarize the topic of the story. After applying our storytelling model, they summarize the topic of a web archive collection. We develop and test a framework to select the best exemplars that represent a collection. We establish that algorithms built from this framework select exemplars that are otherwise undiscoverable using conventional search engine methods. We generate story metadata to improve the information scent of a story so users can understand it better. After an analysis showing that existing platforms perform poorly for web archives and a user study establishing the best surrogate type, we generate document metadata for the exemplars with machine learning. We then visualize the story and document metadata together and distribute it to satisfy the information needs of multiple personas who benefit from our model. Our tools serve as a reference implementation of our Dark and Stormy Archives storytelling model. Hypercane selects exemplars and generates story metadata. MementoEmbed generates document metadata. Raintale visualizes and distributes the story based on the story metadata and the document metadata of these exemplars. By providing understanding immediately, our stories save users the time and effort of reading thousands of documents and, most importantly, help them understand web archive collections.
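    To make the pipeline concrete, the following is a minimal Python sketch of the flow described above: select exemplars, build a surrogate per exemplar, and assemble story metadata for rendering. The class and function names are illustrative placeholders and do not reflect the actual APIs of Hypercane, MementoEmbed, or Raintale.

```python
# Hypothetical sketch of the storytelling pipeline described above.
# All names are placeholders, not the real Hypercane/MementoEmbed/Raintale APIs.
from dataclasses import dataclass, field


@dataclass
class Surrogate:
    """Document metadata used to render one card in the story."""
    memento_url: str
    title: str = ""
    snippet: str = ""
    image_url: str = ""


@dataclass
class Story:
    """Story metadata plus the surrogates for the selected exemplars."""
    collection_id: str
    title: str = ""
    surrogates: list = field(default_factory=list)


def select_exemplars(collection_urls):
    """Stand-in for exemplar selection: keep every Nth document.
    The real tooling scores, clusters, and samples the collection."""
    step = max(1, len(collection_urls) // 20)
    return collection_urls[::step]


def generate_document_metadata(memento_url):
    """Stand-in for surrogate generation for one archived page."""
    return Surrogate(memento_url=memento_url, title=memento_url.rsplit("/", 1)[-1])


def tell_story(collection_id, collection_urls):
    """Stand-in for story assembly: combine story and document metadata."""
    story = Story(collection_id=collection_id, title=f"Collection {collection_id}")
    for url in select_exemplars(collection_urls):
        story.surrogates.append(generate_document_metadata(url))
    return story
```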

    Avoiding Spoilers on Mediawiki Fan Sites Using Memento

    A variety of fan-based wikis about episodic fiction (e.g., television shows, novels, movies) exist on the World Wide Web. These wikis provide a wealth of information about complex stories, but if readers are behind in their viewing they run the risk of encountering spoilers -- information that gives away key plot points before the time intended by the show's writers. Enterprising readers might browse the wiki in a web archive so as to view the page prior to a specific episode date and thereby avoid spoilers. Unfortunately, due to how web archives choose the best page, it is still possible to see spoilers (especially in sparse archives). In this paper we discuss how to use Memento to avoid spoilers. Memento uses TimeGates to determine the best archived page to give back to the user, currently using a minimum distance heuristic. We quantify how this heuristic is inadequate for avoiding spoilers, analyzing data collected from fan wikis and the Internet Archive. We create an algorithm for calculating the probability of encountering a spoiler in a given wiki article. We conduct an experiment with 16 wiki sites for popular television shows. We find that 38% of those pages are unavailable in the Internet Archive. We find that when accessing fan wiki pages in the Internet Archive there is as much as a 66% chance of encountering a spoiler. Using sample access logs from the Internet Archive, we find that 19% of actual requests to the Wayback Machine for wikia.com pages ended in spoilers. We suggest the use of a different minimum distance heuristic, minpast, for wikis, using the desired datetime as an upper bound. Finally, we highlight the use of an extension for MediaWiki that utilizes this new heuristic and can be used to avoid spoilers. An unexpected revelation about Memento comes from the development of this extension. It turns out that an optimized two request-response Memento pattern for interacting with TimeGates does not perform well with MediaWiki, leading us to fall back to the original Memento pattern of three request-response pairs. We also conduct performance testing on the extension and show that it has a minimal impact on MediaWiki's performance.
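    A minimal sketch of the two TimeGate heuristics contrasted above: the default minimum-distance choice and the proposed minpast choice, which treats the desired datetime as an upper bound. The memento datetimes and the desired datetime are invented for illustration.

```python
# Sketch of the two TimeGate selection heuristics discussed above.
# The memento list and datetimes are illustrative, not data from the paper.
from datetime import datetime

mementos = [
    datetime(2014, 5, 1),   # capture before the episode of interest
    datetime(2014, 9, 15),  # capture after it, which may contain spoilers
]


def mindist(desired, mementos):
    """Default heuristic: closest capture in either direction (spoiler risk)."""
    return min(mementos, key=lambda m: abs((m - desired).total_seconds()))


def minpast(desired, mementos):
    """Proposed heuristic: closest capture at or before the desired datetime."""
    past = [m for m in mementos if m <= desired]
    return max(past) if past else None


desired = datetime(2014, 9, 10)
print(mindist(desired, mementos))  # 2014-09-15: nearest overall, a potential spoiler
print(minpast(desired, mementos))  # 2014-05-01: nearest without going past the date
```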

    Discovering Image Usage Online: A Case Study With "Flatten the Curve''

    Understanding the spread of images across the web helps us understand the reuse of scientific visualizations and their relationship with the public. The "Flatten the Curve" graphic was heavily used during the COVID-19 pandemic to convey a complex concept in a simple form. It displays two curves comparing the impact on case loads for medical facilities if the populace either adopts or fails to adopt protective measures during a pandemic. We use five variants of the "Flatten the Curve" image as a case study for viewing the spread of an image online. To evaluate its spread, we leverage three information channels: reverse image search engines, social media, and web archives. Reverse image searches give us a current view into image reuse. Social media helps us understand a variant's popularity over time. Web archives help us see when it was preserved, highlighting a view of popularity for future researchers. Our case study demonstrates that document URLs can be used as a proxy for images when studying the spread of images online.
    Comment: 6 pages, 5 figures, presented as a poster at JCDL 202
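    As an illustration of the web archive channel, the sketch below queries the Internet Archive's CDX API for the capture history of an image URL, showing when and how often a given image was preserved. The image URL is a placeholder, not one of the five variants studied in the paper.

```python
# Minimal sketch of the "web archives" channel: ask the Internet Archive's CDX API
# when a given image URL was captured. The image URL below is a placeholder.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

IMAGE_URL = "https://example.org/flatten-the-curve.png"  # placeholder image URL

query = urlencode({
    "url": IMAGE_URL,
    "output": "json",
    "fl": "timestamp,original",
    "collapse": "digest",   # skip captures whose content did not change
})

with urlopen(f"https://web.archive.org/cdx/search/cdx?{query}") as response:
    body = response.read().decode("utf-8")

# The API returns an empty body when there are no captures; otherwise the first
# row is a header and the rest are (timestamp, original URL) pairs.
rows = json.loads(body) if body.strip() else []
for timestamp, original in rows[1:]:
    print(timestamp, original)
```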

    Abstract Images Have Different Levels of Retrievability Per Reverse Image Search Engine

    Much computer vision research has focused on natural images, but technical documents typically consist of abstract images, such as charts, drawings, diagrams, and schematics. How well do general web search engines discover abstract images? Recent advancements in computer vision and machine learning have led to the rise of reverse image search engines. Where conventional search engines accept a text query and return a set of document results, including images, a reverse image search accepts an image as a query and returns a set of images as results. This paper evaluates how well common reverse image search engines discover abstract images. We conducted an experiment leveraging images from Wikimedia Commons, a website known to be well indexed by Baidu, Bing, Google, and Yandex. We measure how difficult an image is to find again (retrievability), what percentage of images returned are relevant (precision), and how soon in the ranked results a visitor finds the submitted image (mean reciprocal rank). When trying to discover the same image again among similar images, Yandex performs best. When searching for pages containing a specific image, Google and Yandex outperform the others when discovering photographs, with precision scores of 0.8191 and 0.8297, respectively. In both of these cases, Google and Yandex perform better with natural images than with abstract ones, achieving a difference in retrievability as high as 54% between images in these categories. These results affect anyone applying common web search engines to search for technical documents that use abstract images.
    Comment: 20 pages; 7 figures; to be published in the proceedings of the Drawings and abstract Imagery: Representation and Analysis (DIRA) Workshop from ECCV 202
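    The following sketch shows how the three measures could be computed on toy ranked result lists. The queries, results, and relevance sets are invented for illustration and are not data from the experiment.

```python
# Toy computation of retrievability, precision, and mean reciprocal rank (MRR).
# The result lists below are illustrative only.

def precision(results, relevant):
    """Fraction of returned results that are relevant."""
    return sum(1 for r in results if r in relevant) / len(results) if results else 0.0


def reciprocal_rank(results, target):
    """1 / rank of the first result matching the submitted image, else 0."""
    for rank, r in enumerate(results, start=1):
        if r == target:
            return 1.0 / rank
    return 0.0


# One query per submitted image; each list is an engine's ranked results.
queries = [
    {"results": ["img_a", "img_b", "img_c"], "target": "img_b", "relevant": {"img_a", "img_b"}},
    {"results": ["img_x", "img_y"], "target": "img_z", "relevant": {"img_y"}},
]

mrr = sum(reciprocal_rank(q["results"], q["target"]) for q in queries) / len(queries)
retrievability = sum(1 for q in queries if q["target"] in q["results"]) / len(queries)
mean_precision = sum(precision(q["results"], q["relevant"]) for q in queries) / len(queries)
print(f"MRR: {mrr:.2f}, retrievability: {retrievability:.2f}, precision: {mean_precision:.2f}")
```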

    Bringing Web Time Travel to MediaWiki: An Assessment of the Memento MediaWiki Extension

    We have implemented the Memento MediaWiki Extension Version 2.0, which brings the Memento Protocol to MediaWiki, used by Wikipedia and the Wikimedia Foundation. Test results show that the extension has a negligible impact on performance. Two 302 status code datetime negotiation patterns, as defined by Memento, have been examined for the extension: Pattern 1.1, which requires 2 requests, versus Pattern 2.1, which requires 3 requests. Our test results and mathematical review find that, contrary to intuition, Pattern 2.1 performs better than Pattern 1.1 due to idiosyncrasies in MediaWiki. In addition to implementing Memento, Version 2.0 allows administrators to choose the optional 200-style datetime negotiation Pattern 1.2 instead of Pattern 2.1. It also allows administrators to have the Memento MediaWiki Extension return full HTTP 400 and 500 status codes rather than using standard MediaWiki error pages. Finally, Version 2.0 permits administrators to turn off recommended Memento headers if desired. Because much of our work focuses on producing the correct revision of a wiki page in response to a user's datetime input, we also examine the problem of finding the correct revisions of the embedded resources, including images, stylesheets, and JavaScript, identifying the issues and discussing whether or not MediaWiki must be changed to support this functionality.
    Comment: 23 pages, 18 figures, 9 tables, 17 listing
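    For readers unfamiliar with Memento datetime negotiation, below is a rough client-side sketch of the three request-response pairs of Pattern 2.1, driven by the Accept-Datetime header. The wiki and TimeGate URLs are placeholders rather than the extension's actual endpoints, and Link-header parsing is omitted.

```python
# Sketch of a Pattern 2.1 exchange (three request-response pairs) from the client side.
# URLs are placeholders; a real MediaWiki install would advertise its own TimeGate.
import requests

ORIGINAL = "https://wiki.example.org/wiki/Some_Page"   # placeholder original resource
ACCEPT_DATETIME = "Thu, 01 May 2014 00:00:00 GMT"      # desired point in time

# Request 1: the original resource advertises its TimeGate via a Link header.
r1 = requests.get(ORIGINAL)
# Parsing r1's Link header (rel="timegate") is omitted; the URI below is a placeholder.
timegate = "https://wiki.example.org/wiki/Special:TimeGate/Some_Page"

# Request 2: datetime negotiation with the TimeGate; it answers 302 with the memento's URI.
r2 = requests.get(timegate,
                  headers={"Accept-Datetime": ACCEPT_DATETIME},
                  allow_redirects=False)
memento_uri = r2.headers.get("Location")

# Request 3: fetch the selected memento (the archived revision) itself.
r3 = requests.get(memento_uri)
print(r2.status_code, memento_uri, r3.status_code)  # expect 302, the revision URI, 200
```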

    Population Ecology and Epidemiology of Sea Lice in Canadian Waters

    Sea lice are found on farmed and wild fish on both the west coast and east coast of Canada. The predominant species on both coasts is referred to as Lepeophtheirus salmonis, but indications are that the two groups are genetically different. Caligus species are also found on both coasts; these, too, are different species: Caligus clemensi and C. elongatus, respectively. There has been extensive work on sea lice on both wild and farmed fish over the last decade. Research indicates that L. salmonis, commonly referred to as the salmon louse, may have a broader host range than commonly thought, infecting species such as the three-spine stickleback. The role of farmed salmon, particularly farmed Atlantic Salmon, as potential reservoirs of L. salmonis is accepted. What is still debated is the effect of sea lice infections on wild salmon populations, and whether the establishment of farm-level treatment thresholds is the most appropriate method to manage the situation. There is indication that various Pacific salmon species have different tolerances to both L. salmonis and C. clemensi, and the role of other non-salmon species in the ecology and epidemiology of sea lice still needs to be better researched. Published work on sea lice on farmed salmon on the East Coast is more limited; research on wild Atlantic Salmon even more so. This Research Document was presented and reviewed as part of the Canadian Science Advisory Secretariat (CSAS) National peer-review meeting, Sea Lice Monitoring and Non-Chemical Measures, held in Ottawa, Ontario, September 25-27, 2012. The objective of this peer-review meeting was to assess the state of knowledge and provide scientific advice on sea lice management measures, monitoring, and interactions between cultured and wild fish.